    Combined administration of nicorandil and atorvastatin in patients with acute myocardial infarction after coronary intervention, and its effect on postoperative cardiac systolic function

    Purpose: To study the effect of a combination of nicorandil and atorvastatin calcium in patients with acute myocardial infarction after coronary intervention, and its effect on the patients' postoperative cardiac systolic function. Methods: A retrospective analysis was performed on 100 patients with acute myocardial infarction treated with coronary interventional therapy in The Third Affiliated Hospital of Qiqihaer Medical University from April 2019 to August 2020. The patients were randomised into control and study groups, with 50 patients in each group. The control group was treated with nicorandil, while the study group was treated with a combination of nicorandil and atorvastatin. Treatment response, cardiac structural indices, cardiac systolic function, blood lipid profiles, quality of life (QLI) score, Barthel Index (BI), Fugl-Meyer assessment (FMA) motor function score, incidence of adverse reactions, and blood pressure changes on days 1, 2, 3 and 4 after surgery were compared between the two groups. Results: Treatment effectiveness, cardiac systolic function, QLI score, BI and FMA motor function score in the study group were higher than the corresponding control values (p < 0.05), while cardiac structural indices, blood lipid levels and the incidence of adverse reactions were lower in the study group than in the control group (p < 0.05). No significant disparity in blood pressure was found between the two groups on post-surgery days 1, 2, 3 and 4. Conclusion: The combination of nicorandil and atorvastatin calcium tablets produced better outcomes in patients with acute myocardial infarction after coronary intervention therapy; furthermore, the combination therapy significantly improved the patients' cardiac systolic function.

    FabricFolding: Learning Efficient Fabric Folding without Expert Demonstrations

    Autonomous fabric manipulation is a challenging task due to complex dynamics and potential self-occlusion during fabric handling. An intuitive method of fabric folding manipulation first involves obtaining a smooth and unfolded fabric configuration before the folding process begins. However, the combination of quasi-static actions such as pick & place and dynamic actions such as fling proves inadequate for effectively unfolding long-sleeved T-shirts with sleeves mostly tucked inside the garment. To address this limitation, this paper introduces an improved quasi-static action called pick & drag, specifically designed to handle this type of fabric configuration. Additionally, an efficient dual-arm manipulation system is designed in this paper, which combines quasi-static actions (including pick & place and pick & drag) and the dynamic fling action to flexibly manipulate fabrics into unfolded and smooth configurations. Subsequently, keypoints of the fabric are detected, enabling autonomous folding. To address the scarcity of publicly available keypoint detection datasets for real fabric, we gathered images of various fabric configurations and types in real scenes to create a comprehensive keypoint dataset for fabric folding. This dataset aims to enhance the success rate of keypoint detection. Moreover, we evaluate the effectiveness of our proposed system in real-world settings, where it consistently and reliably unfolds and folds various types of fabrics, including challenging situations such as long-sleeved T-shirts with most parts of the sleeves tucked inside the garment. Specifically, our method achieves a coverage rate of 0.822 and a success rate of 0.88 for long-sleeved T-shirt folding.

    Rotation-Invariant Deep Embedding for Remote Sensing Images

    Endowing convolutional neural networks (CNNs) with rotation-invariant capability is important for characterizing the semantic contents of remote sensing (RS) images, since these images do not have typical orientations. Most existing deep methods for learning rotation-invariant CNN models are based on the design of proper convolutional or pooling layers, which aim at predicting the correct category labels of RS images regardless of rotation. However, few works have focused on learning rotation-invariant embeddings in the framework of deep metric learning for modeling the fine-grained semantic relationships among RS images in the embedding space. To fill this gap, we first propose a rule that the deep embeddings of rotated images should be closer to each other than to those of any other images (including images belonging to the same class). Then, we propose to maximize the joint probability of leave-one-out image classification and rotational image identification. Under an independence assumption, this optimization leads to the minimization of a novel loss function composed of two terms: 1) a class-discrimination term and 2) a rotation-invariant term. Furthermore, we introduce a penalty parameter that balances these two terms, arriving at the final loss for learning Rotation-invariant Deep embeddings for RS images, termed RiDe. Extensive experiments conducted on two benchmark RS datasets validate the effectiveness of the proposed approach and demonstrate its superior performance when compared to other state-of-the-art methods. The code for this article will be publicly available at https://github.com/jiankang1991/TGRS_RiDe.
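To make the two-term objective concrete, here is a minimal NumPy sketch of a RiDe-style loss: a leave-one-out class-discrimination term plus a rotation-invariant term, balanced by a penalty parameter `lam`. The function name, shapes, and the exact form of each term are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def ride_loss(emb, emb_rot, labels, lam=0.5):
    """Sketch of a two-term RiDe-style loss (illustrative, not the paper's code).

    emb, emb_rot: (N, D) L2-normalised embeddings of N images and their
    rotated copies; labels: (N,) integer class ids; lam: penalty parameter.
    """
    n = emb.shape[0]
    sims = emb @ emb.T  # pairwise cosine similarities between images
    # Class-discrimination term: leave-one-out softmax over same-class pairs.
    cls_loss = 0.0
    for i in range(n):
        mask = np.arange(n) != i           # leave image i out
        logits = sims[i][mask]
        pos = (labels[mask] == labels[i])  # same-class neighbours
        if pos.any():
            p = np.exp(logits) / np.exp(logits).sum()
            cls_loss += -np.log(p[pos].sum() + 1e-12)
    cls_loss /= n
    # Rotation-invariant term: an image and its rotated copy should align,
    # enforced here with a simple cosine-alignment penalty.
    rot_loss = np.mean(1.0 - np.sum(emb * emb_rot, axis=1))
    return cls_loss + lam * rot_loss
```

Driving the rotation term to zero pulls each image toward its own rotated copy, while the class term keeps same-class images close in the leave-one-out softmax; `lam` trades the two off, mirroring the penalty parameter described above.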

    PiCoCo: Pixelwise Contrast and Consistency Learning for Semisupervised Building Footprint Segmentation

    Building footprint segmentation from high-resolution remote sensing (RS) images plays a vital role in urban planning, disaster response, and population density estimation. Convolutional neural networks (CNNs) have recently been used as a workhorse for effectively generating building footprints. However, to completely exploit the prediction power of CNNs, large-scale pixel-level annotations are required. Most state-of-the-art methods based on CNNs focus on the design of network architectures for improving the prediction of building footprints with full annotations, while little work has been done on building footprint segmentation with limited annotations. In this article, we propose a novel semisupervised learning method for building footprint segmentation, which can effectively predict building footprints from a network trained with few annotations (e.g., only 0.0324 km² of a 2.25 km² area is labeled). The proposed method is based on the contrast between building and background pixels in latent space and the consistency of predictions obtained from the CNN models when the input RS images are perturbed. Thus, we term the proposed semisupervised learning framework for building footprint segmentation PiCoCo, as it is based on the enforcement of Pixelwise Contrast and Consistency during the learning phase. Our experiments, conducted on two benchmark building segmentation datasets, validate the effectiveness of the proposed framework compared to several state-of-the-art building footprint extraction and semisupervised semantic segmentation methods.
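The two PiCoCo-style objectives can be sketched as follows; this is a simplified assumption of how such terms might look (centroid-based contrast, squared-error consistency), not the paper's implementation.

```python
import numpy as np

def pixel_contrast(feats, mask, margin=1.0):
    """Pixelwise contrast sketch: pull pixels toward their class centroid,
    push the building and background centroids apart by at least `margin`.

    feats: (H, W, D) pixel embeddings; mask: (H, W) with 1=building, 0=background.
    """
    f = feats.reshape(-1, feats.shape[-1])
    m = mask.reshape(-1).astype(bool)
    c_b, c_bg = f[m].mean(0), f[~m].mean(0)          # class centroids
    pull = (np.mean(np.sum((f[m] - c_b) ** 2, 1)) +
            np.mean(np.sum((f[~m] - c_bg) ** 2, 1)))
    push = max(0.0, margin - np.sum((c_b - c_bg) ** 2))
    return pull + push

def consistency(prob_clean, prob_perturbed):
    """Consistency sketch: mean squared disagreement between building
    probabilities predicted for the original and a perturbed input."""
    return float(np.mean((prob_clean - prob_perturbed) ** 2))
```

In a semisupervised setting the contrast term uses the few labeled pixels, while the consistency term needs no labels at all, which is what lets the unlabeled majority of the image contribute to training.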

    Multi-Risk-RRT: An Efficient Motion Planning Algorithm for Robotic Autonomous Luggage Trolley Collection at Airports

    Robots have become increasingly prevalent in dynamic and crowded environments such as airports and shopping malls. In these scenarios, the critical challenges for robot navigation are reliability and timely arrival at predetermined destinations. While existing risk-based motion planning algorithms effectively reduce collision risks with static and dynamic obstacles, there is still a need for significant performance improvements: dynamic environments demand more rapid responses and more robust planning. To address this gap, we introduce a novel risk-based multi-directional sampling algorithm, the Multi-directional Risk-based Rapidly-exploring Random Tree (Multi-Risk-RRT). Unlike traditional algorithms that rely solely on a single rooted tree or on double trees for state-space exploration, our approach incorporates multiple sub-trees. Each sub-tree independently explores its surrounding environment, while the primary rooted tree collects heuristic information from these sub-trees, facilitating rapid progress toward the goal state. Our evaluations, including simulation and real-world environmental studies, demonstrate that Multi-Risk-RRT outperforms existing unidirectional and bi-directional risk-based algorithms in planning efficiency and robustness.
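The multi-directional idea can be illustrated with a toy planner: several sub-trees grow from random roots, and the main rooted tree absorbs any sub-tree that comes within connection range, inheriting its nodes as heuristic waypoints toward the goal. This is a hypothetical 2-D sketch without obstacles or risk terms, not the authors' planner.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def grow(tree, bounds, step=0.5):
    """Extend a tree (list of 2-D points) one step toward a uniform sample."""
    s = (random.uniform(0, bounds), random.uniform(0, bounds))
    near = min(tree, key=lambda p: dist(p, s))
    d = dist(near, s)
    t = min(1.0, step / d) if d > 0 else 0.0
    tree.append((near[0] + t * (s[0] - near[0]),
                 near[1] + t * (s[1] - near[1])))

def multi_tree_plan(start, goal, bounds=10.0, n_sub=3, iters=200, link=1.0):
    """Grow a main rooted tree plus n_sub independent sub-trees; merge a
    sub-tree into the main tree once any pair of their nodes is within
    `link`, and report success when the main tree reaches the goal."""
    random.seed(1)  # deterministic toy run
    main = [start]
    subs = [[(random.uniform(0, bounds), random.uniform(0, bounds))]
            for _ in range(n_sub)]
    for _ in range(iters):
        grow(main, bounds)
        for sub in subs:
            grow(sub, bounds)
        for sub in subs[:]:  # absorb sub-trees that touch the main tree
            if any(dist(p, q) < link for p in sub for q in main):
                main.extend(sub)
                subs.remove(sub)
        if any(dist(p, goal) < link for p in main):
            return True, main
    return False, main
```

A real risk-based variant would bias sampling away from high-risk regions and validate every edge against obstacles; the merge step above only illustrates how sub-tree exploration can feed the rooted tree.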

    DCP-Net: A Distributed Collaborative Perception Network for Remote Sensing Semantic Segmentation

    Onboard intelligent processing is widely applied in emergency tasks in the field of remote sensing. However, it is predominantly confined to an individual platform with a limited observation range and susceptibility to interference, resulting in limited accuracy. Considering the current state of multi-platform collaborative observation, this article presents a novel distributed collaborative perception network called DCP-Net. Firstly, DCP-Net enables member platforms to enhance perception performance by integrating features from other platforms. Secondly, a self-mutual information match module is proposed to identify collaboration opportunities and select suitable partners, prioritizing critical collaborative features and reducing redundant transmission costs. Thirdly, a related feature fusion module is designed to address the misalignment between local and collaborative features, improving the quality of fused features for the downstream task. We conduct extensive experiments and visualization analyses on three semantic segmentation datasets: Potsdam, iSAID and DFC23. The results demonstrate that DCP-Net comprehensively outperforms existing methods, improving mIoU by 2.61%–16.89% at the highest collaboration efficiency and bringing performance to a state-of-the-art level.
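Partner selection of the kind described above can be sketched as ranking candidate platforms by how much complementary signal their feature summaries carry relative to the local view. The paper's actual self-mutual information criterion is not reproduced here; a cosine-overlap proxy, with hypothetical names and shapes, stands in for it.

```python
import numpy as np

def select_partners(local_feat, candidate_feats, k=2):
    """Hypothetical partner-selection proxy (not DCP-Net's module).

    local_feat: (D,) summary of the local platform's features;
    candidate_feats: (M, D) summaries from M candidate platforms.
    Scores each candidate by cosine dissimilarity to the local summary
    (less overlap = more complementary) and returns the k best indices.
    """
    l = local_feat / (np.linalg.norm(local_feat) + 1e-12)
    c = candidate_feats / (np.linalg.norm(candidate_feats, axis=1,
                                          keepdims=True) + 1e-12)
    complementarity = 1.0 - c @ l  # low feature overlap => high score
    return np.argsort(-complementarity)[:k]
```

Selecting only the most complementary partners is one way to realize the stated goal of prioritizing critical collaborative features while cutting redundant transmission.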

    Indoor Exploration and Simultaneous Trolley Collection Through Task-Oriented Environment Partitioning

    In this paper, we present a simultaneous exploration and object search framework for the application of autonomous trolley collection. For environment representation, a task-oriented environment partitioning algorithm is presented to extract diverse information for each sub-task. First, LiDAR data is classified as potential objects, walls, and obstacles after outlier removal. Segmented point clouds are then transformed into a hybrid map with the following functional components: object proposals to avoid missing trolleys during exploration; room layouts for semantic space segmentation; and polygonal obstacles containing geometry information for efficient motion planning. For exploration and simultaneous trolley collection, we propose an efficient exploration-based object search method. First, a traveling salesman problem with precedence constraints (TSP-PC) is formulated by grouping frontiers and object proposals. The next target is selected by prioritizing object search while avoiding excessive robot backtracking. Then, feasible trajectories with adequate obstacle clearance are generated by topological graph search. We validate the proposed framework through simulations and demonstrate the system with real-world autonomous trolley collection tasks.

    Elevation Estimation-Driven Building 3D Reconstruction from Single-View Remote Sensing Imagery

    Building 3D reconstruction from remote sensing images has a wide range of applications in smart cities, photogrammetry and other fields. Methods for automatic 3D urban building modeling typically employ multi-view images as input to recover point clouds and 3D models of buildings. However, such models rely heavily on multi-view images of buildings, which are time-intensive to acquire and limit the applicability and practicality of the models. To solve these issues, we focus on designing an efficient DSM estimation-driven reconstruction framework (Building3D), which aims to reconstruct 3D building models from a single-view remote sensing image. First, we propose a Semantic Flow Field-guided DSM Estimation (SFFDE) network, which utilizes the proposed concept of elevation semantic flow to achieve the registration of local and global features. Specifically, to make the network's semantics globally aware, we propose an Elevation Semantic Globalization (ESG) module to realize the semantic globalization of instances. Further, to bridge the semantic span between global features and the original local features, we propose a Local-to-Global Elevation Semantic Registration (L2G-ESR) module based on elevation semantic flow. Our Building3D is rooted in the SFFDE network for building elevation prediction, synchronized with a building extraction network for building masks, and then sequentially performs point cloud reconstruction and surface reconstruction (or CityGML model reconstruction). On this basis, Building3D can optionally generate CityGML models or surface mesh models of the buildings. Extensive experiments on the ISPRS Vaihingen and DFC2019 datasets for the DSM estimation task show that our SFFDE significantly improves upon state-of-the-art methods. Furthermore, our Building3D achieves impressive results in the 3D point cloud and 3D model reconstruction process.

    RingMo-lite: A Remote Sensing Multi-task Lightweight Network with CNN-Transformer Hybrid Framework

    In recent years, remote sensing (RS) vision foundation models such as RingMo have emerged and achieved excellent performance in various downstream tasks. However, the high demand for computing resources limits the application of these models on edge devices, so it is necessary to design a more lightweight foundation model to support on-orbit RS image interpretation. Existing methods face challenges in achieving lightweight solutions while retaining generalization in RS image interpretation. This is due to the complex high- and low-frequency spectral components in RS images, which make traditional single-CNN or single-Vision-Transformer methods unsuitable for the task. Therefore, this paper proposes RingMo-lite, an RS multi-task lightweight network with a CNN-Transformer hybrid framework, which effectively exploits the frequency-domain properties of RS images to optimize the interpretation process. In a dual-branch structure, a Transformer module acts as a low-pass filter to extract global features of RS images, while a CNN module acts as a stacked high-pass filter to effectively extract fine-grained details. Furthermore, in the pretraining stage, the designed frequency-domain masked image modeling (FD-MIM) combines each image patch's high-frequency and low-frequency characteristics, effectively capturing the latent feature representation in RS data. As shown in Fig. 1, compared with RingMo, the proposed RingMo-lite reduces the parameters by over 60% in various RS image interpretation tasks, while average accuracy drops by less than 2% in most scenes, and it achieves SOTA performance compared with models of similar size. In addition, our work will be integrated into the MindSpore computing platform in the near future.
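The low-pass/high-pass decomposition that motivates the two branches can be sketched with an FFT split of an image patch; the radius cutoff and function below are our illustrative assumptions, not the released model's preprocessing.

```python
import numpy as np

def freq_split(patch, radius=4):
    """Split a grayscale patch into low- and high-frequency components
    with a circular low-pass mask in the shifted FFT spectrum.

    patch: (H, W) array. Returns (low, high) with low + high == patch:
    `low` is the kind of global structure a low-pass (Transformer) branch
    sees, `high` the fine detail left for a high-pass (CNN) branch.
    """
    f = np.fft.fftshift(np.fft.fft2(patch))
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    keep = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * keep)))
    high = patch - low
    return low, high
```

In an FD-MIM-style pretraining scheme, masking could then be applied to each component separately so the model must recover both the coarse layout and the fine texture of a patch.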